Intelligent dialogue systems that aim to converse with humans harmoniously in natural language are of great significance to advancing human-machine interaction in the era of artificial intelligence. With increasingly complex human-computer interaction demands (e.g., multimodal inputs, time sensitivity), it is difficult for traditional text-based dialogue systems to meet the need for more vivid and convenient interaction. Consequently, visual-context-augmented dialogue systems (VAD), which can communicate with humans by perceiving and understanding multimodal information (i.e., the visual context in images or videos, together with the textual dialogue history), have become a dominant research paradigm. Benefiting from the consistency and complementarity between visual and textual context, VAD has the potential to generate engaging and context-aware responses. To depict the development of VAD, we first characterize the concept and distinctive features of VAD, and then present its generic system architecture to illustrate the system workflow. Subsequently, several research challenges and representative works are studied in detail, followed by a summary of authoritative benchmarks. We conclude the paper by raising open issues and promising research trends for VAD, e.g., the cognitive mechanisms of human-machine dialogue under cross-modal dialogue contexts, and knowledge-enhanced cross-modal semantic interaction.
As deep learning models on large datasets consume substantial time and resources, it is desirable to construct a small synthetic dataset with which we can train deep learning models sufficiently well. Recent works have explored solutions for condensing image datasets through complex bi-level optimization. For instance, Dataset Condensation (DC) matches network gradients w.r.t. the large-real data and the small-synthetic data, where the network weights are optimized for multiple steps at each outer iteration. However, existing approaches have their inherent limitations: (1) they are not directly applicable to graphs, where the data is discrete; and (2) the condensation process is computationally expensive due to the nested optimization involved. To bridge these gaps, we investigate efficient dataset condensation tailored for graph datasets, where we model the discrete graph structure as a probabilistic model. We further propose a one-step gradient matching scheme, which performs gradient matching for only one single step without training the network weights. Our theoretical analysis shows that this strategy can generate synthetic graphs that lead to lower classification loss on real graphs. Extensive experiments on various graph datasets demonstrate the effectiveness and efficiency of the proposed method. In particular, we are able to reduce the dataset size by 90% while approximating up to 98% of the original performance, and our method is significantly faster than multi-step gradient matching (e.g., 15x in CIFAR10 for synthesizing 500 graphs).
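To make the one-step matching scheme concrete, the following is a minimal PyTorch-style sketch under simplifying assumptions: the model, batches, and optimizer are supplied by the caller, the gradient distance is a generic per-parameter cosine distance, and the probabilistic modeling of discrete graph structure is omitted.

```python
# Hedged sketch of one-step gradient matching; names and the distance
# function are illustrative, not the paper's exact code.
import torch
import torch.nn.functional as F

def gradient_distance(grads_real, grads_syn):
    # One common choice: 1 - cosine similarity, summed over parameters.
    dist = 0.0
    for gr, gs in zip(grads_real, grads_syn):
        dist = dist + (1 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0))
    return dist

def one_step_matching(model, real_batch, syn_batch, syn_optimizer):
    # Gradients of the classification loss w.r.t. freshly initialized weights;
    # crucially, the network weights are never updated (single-step matching).
    params = [p for p in model.parameters() if p.requires_grad]

    loss_real = F.cross_entropy(model(real_batch[0]), real_batch[1])
    grads_real = torch.autograd.grad(loss_real, params)

    loss_syn = F.cross_entropy(model(syn_batch[0]), syn_batch[1])
    grads_syn = torch.autograd.grad(loss_syn, params, create_graph=True)

    # Update only the synthetic data to imitate the real-data gradients.
    match_loss = gradient_distance([g.detach() for g in grads_real], grads_syn)
    syn_optimizer.zero_grad()
    match_loss.backward()
    syn_optimizer.step()
    return match_loss.item()
```

Because the inner weight-training loop disappears, each condensation step costs only two gradient evaluations, which is the source of the reported speedup over multi-step matching.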
Generative adversarial networks (GANs) have proven highly successful on image generation tasks, but GAN training suffers from instability. Many works have improved the stability of GAN training by manually modifying GAN architectures, which requires human expertise and extensive trial and error. Consequently, neural architecture search (NAS), which aims to automate model design, has been applied to search for GANs on the task of unconditional image generation. Early NAS-GAN works searched only the generator to reduce the difficulty. Some recent works have attempted to search both the generator (G) and the discriminator (D) to improve GAN performance, but they still suffer from the instability of GAN training during the search. To alleviate this instability, we propose an efficient two-stage evolutionary algorithm (EA) based NAS framework to discover GANs, dubbed EAGAN. Specifically, we decouple the search of G and D into two stages and propose a weight-resetting strategy to improve the stability of GAN training. Moreover, we perform evolution operations to produce Pareto-front architectures under multiple objectives, leading to superior combinations of G and D. By leveraging a weight-sharing strategy and low-fidelity evaluation, EAGAN significantly shortens the search time. EAGAN achieves highly competitive results on CIFAR-10 (IS=8.81$\pm$0.10, FID=9.91) and surpasses previous NAS-searched GANs on the STL-10 dataset (IS=10.44$\pm$0.087, FID=22.18).
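The two-stage decoupling can be illustrated with the self-contained toy sketch below; the architecture encoding, mutation operator, and fitness are deliberately simplistic stand-ins (a real run would reset weights and score candidates with low-fidelity IS/FID over multiple objectives), not EAGAN's actual operators.

```python
# Toy sketch of a two-stage evolutionary architecture search for GANs.
# Encodings and fitness are placeholders, not EAGAN's operators.
import random

def evolve(pop, mutate):
    # (mu + lambda) scheme: keep parents and add one mutated child each.
    return pop + [mutate(p) for p in pop]

def select(pop, fitness, k):
    return sorted(pop, key=fitness, reverse=True)[:k]

def mutate_arch(arch):
    # Flip one architectural choice at random (toy encoding: list of op ids).
    arch = list(arch)
    arch[random.randrange(len(arch))] = random.randrange(4)
    return arch

def two_stage_search(gens=10, pop_size=4, arch_len=6):
    rand_arch = lambda: [random.randrange(4) for _ in range(arch_len)]
    pop_G = [rand_arch() for _ in range(pop_size)]
    pop_D = [rand_arch() for _ in range(pop_size)]
    fixed_D = pop_D[0]

    # Stage 1: evolve generators with the discriminator held fixed.
    # A real system would reset G/D weights here for training stability.
    toy_fitness_G = lambda g: -abs(sum(g) - sum(fixed_D))
    for _ in range(gens):
        pop_G = select(evolve(pop_G, mutate_arch), toy_fitness_G, pop_size)

    # Stage 2: fix the best generator and evolve discriminators.
    best_G = pop_G[0]
    toy_fitness_D = lambda d: -abs(sum(d) - sum(best_G))
    for _ in range(gens):
        pop_D = select(evolve(pop_D, mutate_arch), toy_fitness_D, pop_size)
    return best_G, pop_D[0]
```

Searching G against a frozen D (and vice versa) avoids the moving-target dynamics that destabilize joint G/D search, which is the motivation for the two-stage split.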
In recent years, image recognition applications have developed rapidly, and a large number of studies and techniques have emerged in different fields, such as face recognition, pedestrian and vehicle re-identification, landmark retrieval, and product recognition. In this paper, we propose a practical lightweight image recognition system, named PP-ShiTu, consisting of three modules: mainbody detection, feature extraction, and vector search. We introduce popular strategies including metric learning, deep hashing, knowledge distillation, and model quantization to improve accuracy and inference speed. With the above strategies, PP-ShiTu works well in different scenarios with a single set of models trained on a mixed dataset. Experiments on different datasets and benchmarks show that the system is widely effective across different domains of image recognition. All of the above models are open-sourced, and the code is available in the GitHub repository PaddleClas on PaddlePaddle.
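A minimal sketch of the three-module workflow is shown below; the detector and feature extractor are placeholder callables (any detection model and embedding network would do), and the vector search is a brute-force cosine index rather than the optimized search used in practice.

```python
# Hedged sketch of a detect -> embed -> search recognition pipeline,
# mirroring the three modules; the models here are placeholders.
import numpy as np

class VectorIndex:
    """Brute-force cosine-similarity index (real systems use ANN libraries)."""
    def __init__(self):
        self.vecs, self.labels = [], []

    def add(self, vec, label):
        self.vecs.append(vec / np.linalg.norm(vec))
        self.labels.append(label)

    def search(self, query, top_k=5):
        q = query / np.linalg.norm(query)
        sims = np.stack(self.vecs) @ q
        order = np.argsort(-sims)[:top_k]
        return [(self.labels[i], float(sims[i])) for i in order]

def recognize(image, detector, extractor, index, top_k=5):
    results = []
    for crop in detector(image):                    # 1) mainbody detection
        feat = extractor(crop)                      # 2) feature extraction
        results.append(index.search(feat, top_k))   # 3) vector search
    return results
```

The gallery is built once by running the extractor over labeled reference images and adding each embedding to the index; recognition then reduces to nearest-neighbor lookup.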
This paper considers stochastic linear bandits with general nonlinear constraints. The objective is to maximize the expected cumulative reward subject to a set of constraints in every round $\tau \leq T$. We propose a pessimistic-optimistic algorithm that is efficient in two aspects. First, the algorithm yields $\tilde{\mathcal{O}}\left(\left(\frac{K^{0.75}}{\delta}+d\right)\sqrt{\tau}\right)$ (pseudo) regret in round $\tau \leq T$, where $K$ is the number of constraints, $d$ is the dimension of the reward feature space, and $\delta$ is a Slater's constant; and zero constraint violation in any round $\tau > \tau'$, for a $\tau'$ independent of horizon $T$. Second, the algorithm is computationally efficient. Our algorithm is based on the primal-dual approach in optimization and includes two components. The primal component is similar to unconstrained stochastic linear bandits (our algorithm uses the linear upper confidence bound (LinUCB) algorithm). The computational complexity of the dual component depends on the number of constraints, but is independent of the sizes of the context space, the action space, and the feature space. Thus, the overall computational complexity of our algorithm is similar to that of LinUCB for unconstrained stochastic linear bandits.
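The primal-dual structure can be sketched as a single round of play: a LinUCB-style optimistic reward estimate is penalized by dual variables weighting estimated constraint violations, and the duals are then updated by projected gradient ascent. The violation estimator and step sizes below are simplifying assumptions, not the paper's exact construction.

```python
# Hedged sketch of one pessimistic-optimistic (primal-dual) bandit round.
import numpy as np

def primal_dual_round(actions, A_inv, theta_hat, duals,
                      est_violation, alpha=1.0, eta=0.1):
    """actions: array of feature vectors, one per arm;
    A_inv: inverse design matrix; theta_hat: ridge estimate of the reward;
    duals: one nonnegative multiplier per constraint;
    est_violation(x, k): estimated violation of constraint k at action x."""
    K = len(duals)
    scores = []
    for x in actions:
        bonus = alpha * np.sqrt(x @ A_inv @ x)      # optimism for the reward
        ucb_reward = x @ theta_hat + bonus
        penalty = sum(duals[k] * est_violation(x, k) for k in range(K))
        scores.append(ucb_reward - penalty)         # pessimism via the duals
    choice = int(np.argmax(scores))

    # Dual update: projected gradient ascent on the Lagrangian.
    x = actions[choice]
    duals = np.maximum(0.0, duals + eta * np.array(
        [est_violation(x, k) for k in range(K)]))
    return choice, duals
```

Note that the dual update touches only the $K$ multipliers, which is why its cost scales with the number of constraints rather than with the context, action, or feature space.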
Decompilation aims to transform a low-level programming language (LPL) (e.g., binary file) into its functionally equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
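To give a feel for the OTU idea of translating smaller code fragments, here is a toy splitter over a flat instruction list; the boundary rule (cutting after control-flow instructions) and the assembly snippet are illustrative assumptions, not NeurDP's actual criterion.

```python
# Toy illustration of splitting a function into smaller translation
# units, in the spirit of OTU. The boundary heuristic is an assumption.
BOUNDARY_OPS = {"jmp", "je", "jne", "call", "ret"}

def split_into_units(instructions, max_len=8):
    units, current = [], []
    for ins in instructions:
        current.append(ins)
        opcode = ins.split()[0]
        if opcode in BOUNDARY_OPS or len(current) >= max_len:
            units.append(current)
            current = []
    if current:
        units.append(current)
    return units

# Each unit would then be lifted to IR and decompiled independently.
asm = ["mov eax, 1", "add eax, ebx", "cmp eax, 0", "je .L1",
       "mov ecx, eax", "ret"]
print(split_into_units(asm))
```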
Recently, deep learning has shown its advantages in representation learning and clustering for time series data. Despite the considerable progress, the existing deep time series clustering approaches mostly seek to train the deep neural network by some instance-reconstruction-based or cluster-distribution-based objective, which, however, lacks the ability to exploit the sample-wise (or augmentation-wise) contrastive information, or even the higher-level (e.g., cluster-level) contrastiveness, for learning discriminative and clustering-friendly representations. In light of this, this paper presents a deep temporal contrastive clustering (DTCC) approach, which, to our knowledge, is the first to incorporate the contrastive learning paradigm into deep time series clustering research. Specifically, with two parallel views generated from the original time series and their augmentations, we utilize two identical auto-encoders to learn the corresponding representations, and in the meantime perform cluster distribution learning by incorporating a k-means objective. Further, two levels of contrastive learning are simultaneously enforced to capture the instance-level and cluster-level contrastive information, respectively. With the reconstruction loss of the auto-encoder, the cluster distribution loss, and the two levels of contrastive losses jointly optimized, the network architecture is trained in a self-supervised manner and the clustering result can thereby be obtained. Experiments on a variety of time series datasets demonstrate the superiority of our DTCC approach over the state-of-the-art.
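A hedged sketch of how the four losses might be combined is given below; the encoder, decoder, and soft-assignment modules are placeholders, the k-means term is approximated by a simple cross-view consistency loss, and nt_xent is a generic contrastive loss applied to representations (instance level) and to transposed assignment matrices (cluster level).

```python
# Hedged sketch of a joint objective in the spirit of DTCC; modules and
# loss weights are placeholders, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Generic normalized-temperature contrastive loss over paired views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))    # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

def dtcc_loss(x, x_aug, encoder, decoder, soft_assign,
              w_rec=1.0, w_clu=1.0, w_inst=1.0, w_clust=1.0):
    z1, z2 = encoder(x), encoder(x_aug)
    rec = F.mse_loss(decoder(z1), x) + F.mse_loss(decoder(z2), x_aug)
    p1, p2 = soft_assign(z1), soft_assign(z2)   # soft cluster assignments
    clu = F.mse_loss(p1, p2)                    # stand-in for the k-means term
    inst = nt_xent(z1, z2)                      # instance-level contrast
    clust = nt_xent(p1.t(), p2.t())             # cluster-level contrast
    return w_rec * rec + w_clu * clu + w_inst * inst + w_clust * clust
```

Contrasting the transposed assignment matrices treats each cluster's assignment vector as a sample, so the two views must agree on clusters as well as on individual instances.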
Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF, achieve great success in zero-shot text-guided 3D synthesis. However, due to training from scratch with random initialization and no prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the corresponding text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from the input text in a text-to-shape stage as the 3D shape prior. We then use it to initialize a neural radiance field, which we optimize with the full prompt. For the text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between images synthesized by the text-to-image model and the shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, namely Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
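The overall pipeline can be outlined as below, with every callable a hypothetical stand-in (the fine-tuned text-to-image diffusion model, the image-to-shape generator, the NeRF initializer, and the CLIP-guided update); the style-suffix prompt format is likewise an assumption.

```python
# Hedged outline of a Dream3D-style pipeline; all callables are
# hypothetical stand-ins for the actual models.
def dream3d_pipeline(text, text_to_image, image_to_shape,
                     init_nerf_from_shape, clip_optimize, steps=1000):
    # Stage 1: text-to-shape prior via a rendering-style-tuned
    # text-to-image diffusion model and an image-to-shape generator.
    style_prompt = text + ", 3D rendering style"   # assumed prompt format
    image = text_to_image(style_prompt)
    shape_prior = image_to_shape(image)

    # Stage 2: use the shape to initialize a neural radiance field,
    # then optimize it against the full prompt with CLIP guidance.
    nerf = init_nerf_from_shape(shape_prior)
    for _ in range(steps):
        nerf = clip_optimize(nerf, text)
    return nerf
```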
Adversarial patches are an important form of real-world adversarial attack that bring serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content, which reveals that both the positions and the perturbations are important to the adversarial attack. For that reason, in this paper we propose a novel method to simultaneously optimize the position and perturbation for an adversarial patch, and thus obtain a high attack success rate in the black-box setting. Technically, we regard the patch's position and the pre-designed hyper-parameters that determine the patch's perturbations as the variables, and utilize a reinforcement learning framework to simultaneously solve for the optimal solution based on the rewards obtained from the target model with a small number of queries. Extensive experiments are conducted on the face recognition (FR) task, and results on four representative FR models show that our method can significantly improve the attack success rate and query efficiency. Besides, experiments on a commercial FR service and in physical environments confirm its practical application value. We also extend our method to the traffic sign recognition task to verify its generalization ability.
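One way to picture the joint optimization is a small REINFORCE-style agent sampling a discrete pasting position and a perturbation hyper-parameter index, rewarded by the black-box target model; the environment callables and reward definition below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: jointly sampling patch position and a perturbation
# hyper-parameter with a tiny REINFORCE agent and black-box rewards.
import torch

def attack(query_model, paste_patch, n_positions=64, n_params=8,
           episodes=200, lr=0.05):
    # One categorical head per variable: pasting position and a
    # pre-designed perturbation hyper-parameter index.
    logits_pos = torch.zeros(n_positions, requires_grad=True)
    logits_par = torch.zeros(n_params, requires_grad=True)
    opt = torch.optim.Adam([logits_pos, logits_par], lr=lr)

    for _ in range(episodes):
        dist_pos = torch.distributions.Categorical(logits=logits_pos)
        dist_par = torch.distributions.Categorical(logits=logits_par)
        pos, par = dist_pos.sample(), dist_par.sample()

        adv_image = paste_patch(int(pos), int(par))  # build patched input
        reward = query_model(adv_image)              # e.g., 1 - target score

        # REINFORCE: raise the log-probability of rewarding choices.
        loss = -(dist_pos.log_prob(pos) + dist_par.log_prob(par)) * reward
        opt.zero_grad()
        loss.backward()
        opt.step()
    return int(torch.argmax(logits_pos)), int(torch.argmax(logits_par))
```

Each episode costs a single query, which is how a policy-gradient formulation keeps the query budget small compared with gradient-free search over the raw pixel space.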
With the increase in health consciousness, noninvasive body monitoring has aroused interest among researchers. As the heart rate (HR) is one of the most important pieces of physiological information, researchers have remotely estimated it from facial videos in recent years. Although progress has been made over the past few years, some limitations remain, such as processing time that increases with accuracy and the lack of comprehensive and challenging datasets for use and comparison. Recently, it was shown that HR information can be extracted from facial videos by spatial decomposition and temporal filtering. Inspired by this, a new framework is introduced in this paper to remotely estimate the HR under realistic conditions by combining spatial and temporal filtering with a convolutional neural network. Our proposed approach shows better performance than the benchmark on the MMSE-HR dataset in terms of both average HR estimation and short-time HR estimation, and high consistency in short-time HR estimation is observed between our method and the ground truth.
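As a minimal stand-in for the temporal-filtering stage, the sketch below band-passes the spatially averaged face-region signal to a plausible heart-rate band (0.7 to 4 Hz, i.e., 42 to 240 bpm) and reads off the dominant frequency; the spatial decomposition and the CNN are omitted.

```python
# Hedged sketch of temporal filtering for remote HR estimation; the full
# pipeline (spatial decomposition + CNN) is not reproduced here.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_hr(roi_means, fs, low=0.7, high=4.0):
    """roi_means: 1-D array of per-frame mean green-channel values of the
    face region; fs: video frame rate in Hz. Returns HR in beats/minute."""
    b, a = butter(3, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, roi_means - np.mean(roi_means))

    # Dominant frequency within the physiologically plausible band.
    spectrum = np.abs(np.fft.rfft(filtered)) ** 2
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
    band = (freqs >= low) & (freqs <= high)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```

Short-time HR estimation follows the same recipe applied to a sliding window of frames instead of the whole clip.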